To analyze algorithms consistently, we use a standardized model where every basic CPU operation takes one unit of time.
- The previous slide established that an algorithm's complexity $O(f(n))$ ultimately describes how many instructions the CPU physically executes. To standardize that counting, we use the Random Access Machine (RAM) Model.
- This model assumes a generic, single-processor architecture where every basic operation takes one unit of time.
- Initialization/Assignment (Cost 1): Setting initial values for variables, like loop counters or pointers (e.g., low = 0).
- Arithmetic Operations (Cost 1): Basic math like addition, subtraction, multiplication, and division (e.g., calculating mid = (low + high) // 2).
- Logic Operations (Cost 1): Comparisons that control program flow (e.g., checking if $A[mid] < t$).
- Data Movement (Cost 1): Accessing memory to fetch or store data, such as reading an element from an array (e.g., $A[mid]$).
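To make these unit costs concrete, here is a minimal Python sketch of a single binary-search step. The array A and target t are hypothetical example values, and the per-line cost tallies are one reasonable way to count under the RAM model; exact constants depend on how finely you count.

```python
# One step of binary search, annotated with RAM-model unit costs.
A = [1, 3, 5, 7, 9, 11]   # example sorted array (assumed for illustration)
t = 7                      # example target (assumed for illustration)

low = 0                    # initialization/assignment: cost 1
high = len(A) - 1          # arithmetic + assignment: cost 2

mid = (low + high) // 2    # two arithmetic ops + assignment: cost 3
if A[mid] < t:             # data movement (read A[mid]) + comparison: cost 2
    low = mid + 1          # arithmetic + assignment: cost 2
else:
    high = mid             # assignment: cost 1
```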
Crucial Insight: When we state Linear Search is $O(n)$, we are asserting that in the worst case the algorithm executes at most $k \cdot n$ atomic operations for some constant $k$, regardless of the specific CPU's speed.
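As a rough illustration of that claim, the sketch below counts atomic operations while running linear search in its worst case (target absent). The helper linear_search_ops and its exact tally are assumptions for illustration only; the point is that the total grows linearly with n.

```python
def linear_search_ops(A, t):
    """Linear search that also tallies RAM-model operations.

    The precise count depends on counting conventions, but it is
    always bounded by k * n for a small constant k.
    """
    ops = 0
    i = 0                      # assignment: cost 1
    ops += 1
    while i < len(A):          # comparison (and length lookup): cost 2
        ops += 2
        if A[i] == t:          # data movement + comparison: cost 2
            ops += 2
            return i, ops
        i += 1                 # arithmetic + assignment: cost 2
        ops += 2
    return -1, ops

# Worst case: the target is absent, so every element is inspected.
idx, ops = linear_search_ops(list(range(100)), -1)
print(idx, ops)  # ops grows linearly with n (roughly 6n plus a constant here)
```

Doubling n roughly doubles the reported operation count, while the constant $k$ (about 6 in this tally) depends only on how finely we choose to count, not on the input size.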
Atomic Operations in the RAM Model
| Operation Type | Example | Cost |
|---|---|---|
| Initialization/Assignment | low = 0 | 1 |
| Arithmetic | (low + high) // 2 | 1 per op |
| Logic/Comparison | A[mid] < t | 1 |
| Data Movement (Memory) | A[mid] | 1 |